• Develop and maintain ETLs from multiple distinct data sources using SQL and a custom data pipeline
• Build data load strategies based on functional requirements using technical best practices
• Pursue data quality, conduct data validation, troubleshoot data issues, and see issues through to resolution
• Interview data consumers to understand requirements, develop functional specifications, and complete delivery of new data pipelines
• Seek opportunities to improve the data pipeline and platform, and work with Data Engineers to prioritize, develop, and test these improvements
• Communicate with business data analysts across Intuit to answer technical and functional questions and enable them to access and interpret the data we provide
Required Skills:
• 5+ years of experience working on ETL/data pipelines to support analytics use cases
• Advanced SQL and data warehousing knowledge
• Expertise working with big datasets and optimizing MPP databases and Hive queries
• Familiarity with AWS (Redshift, Athena, and AWS core concepts)
• PySpark/Python experience
• Clickstream data experience

Location: San Diego, hybrid (onsite 3 days/week); open to Mountain View, CA

Interview Process: 1 round, 60 minutes, including behavioral and technical questions